Results 1 - 4 of 4
1.
BMC Med Res Methodol ; 21(1): 88, 2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33906604

ABSTRACT

BACKGROUND: Crowdsourcing engages the help of large numbers of people in tasks, activities or projects, usually via the internet. One application of crowdsourcing is the screening of citations for inclusion in a systematic review. There is evidence that a 'Crowd' of non-specialists can reliably identify quantitative studies, such as randomized controlled trials, through the assessment of study titles and abstracts. In this feasibility study, we investigated crowd performance of an online, topic-based citation-screening task, assessing titles and abstracts for inclusion in a single mixed-studies systematic review.

METHODS: This study was embedded within a mixed-studies systematic review of maternity care exploring the effects of training healthcare professionals in intrapartum cardiotocography. Citation screening was undertaken via Cochrane Crowd, an online citizen science platform that enables volunteers to contribute to a range of tasks identifying evidence in health and healthcare. Contributors were recruited from users registered with Cochrane Crowd. Following completion of task-specific online training, the crowd and the review team independently screened 9546 titles and abstracts. The screening task was subsequently repeated with a new crowd after minor changes to the crowd agreement algorithm based on findings from the first screening task. We assessed the crowd decisions against the review team categorizations (the 'gold standard'), measuring sensitivity, specificity, time and task engagement.

RESULTS: Seventy-eight crowd contributors completed the first screening task. Sensitivity (the crowd's ability to correctly identify studies included within the review) was 84% (N = 42/50), and specificity (the crowd's ability to correctly identify excluded studies) was 99% (N = 9373/9493). Task completion took 33 h for the crowd and 410 h for the review team; the mean time to classify each record was 6.06 s for each crowd participant and 3.96 s for review team members. Replicating the task with 85 new contributors and an altered agreement algorithm found 94% sensitivity (N = 48/50) and 98% specificity (N = 9348/9493). Contributors reported positive experiences of the task.

CONCLUSION: It might be feasible to recruit and train a crowd to accurately perform topic-based citation screening for mixed-studies systematic reviews, though the resources required for the necessary customised training should be factored in. Given long review production times, crowd screening may enable more time-efficient conduct of reviews with minimal reduction in citation-screening accuracy, but further research is needed.
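As a quick illustration of how the headline figures above are derived, the sketch below recomputes sensitivity and specificity from the counts reported for the first screening task. It is not code from the study; the counts come from the abstract and the function names are ours.

    # Illustrative sketch only: recompute the abstract's sensitivity and
    # specificity from the reported counts (crowd decisions judged against
    # the review team's 'gold standard').

    def sensitivity(true_positives, all_included):
        # Proportion of the review's included studies that the crowd also included.
        return true_positives / all_included

    def specificity(true_negatives, all_excluded):
        # Proportion of the review's excluded studies that the crowd also excluded.
        return true_negatives / all_excluded

    # First screening task (78 contributors), counts as reported above.
    print(f"sensitivity = {sensitivity(42, 50):.0%}")      # 84%
    print(f"specificity = {specificity(9373, 9493):.0%}")  # 99%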


Subjects
Crowdsourcing, Maternal Health Services, Feasibility Studies, Female, Humans, Mass Screening, Pregnancy, Randomized Controlled Trials as Topic, Research, Systematic Reviews as Topic
2.
Vet Rec ; 187(12): e113, 2020 Dec 19.
Article in English | MEDLINE | ID: mdl-33288633

ABSTRACT

In early 2019, four stallions in the south of England tested positive for equine viral arteritis following routine prebreeding screening. Here, a team from Defra and the APHA describe the epidemiological investigation that was carried out to determine the origin of infection and the potential for its transmission across the country.


Subjects
Arteritis/veterinary, Horse Diseases/epidemiology, Horse Diseases/virology, Animals, Arteritis/epidemiology, Arteritis/prevention & control, Arteritis/virology, Disease Outbreaks, Equartevirus, Horse Diseases/prevention & control, Horses, Male, United Kingdom/epidemiology
3.
Br J Cancer ; 116(2): 237-245, 2017 Jan 17.
Article in English | MEDLINE | ID: mdl-27959886

ABSTRACT

BACKGROUND: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts on translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing - enlisting help from the public - is a sufficiently accurate method to score such samples.

METHODS: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial with annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers - bladder/ki67, lung/EGFR and oesophageal/CD8 - to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples.

RESULTS: For cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial alone. Using the optimised tutorial, we demonstrate highly accurate (>0.90 area under the curve) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) and 0.97 (0.91, 0.99) for lung/EGFR and bladder/p53 samples, respectively).

CONCLUSIONS: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains.
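The two accuracy measures named in this abstract, area under the ROC curve for cancer detection and Spearman correlation between lay and expert immunohistochemistry scores, can be computed as in the sketch below. The data are made-up toy values for demonstration only, not the study data; the scipy and scikit-learn routines are standard.

    # Illustrative sketch only: the accuracy measures named in the abstract
    # (ROC AUC and Spearman correlation), computed here on made-up toy data.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Toy cancer-detection task: expert label (0/1) vs. aggregated crowd score (0-1).
    expert_labels = rng.integers(0, 2, size=200)
    crowd_scores = np.clip(expert_labels * 0.7 + rng.normal(0.15, 0.2, size=200), 0.0, 1.0)
    print("AUC:", roc_auc_score(expert_labels, crowd_scores))

    # Toy immunohistochemistry scoring: expert vs. crowd score per tissue core.
    expert_ihc = rng.uniform(0, 100, size=60)
    crowd_ihc = expert_ihc + rng.normal(0, 8, size=60)
    rho, p_value = spearmanr(expert_ihc, crowd_ihc)
    print("Spearman rho:", rho)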


Subjects
Tumor Biomarkers/metabolism, Crowdsourcing/methods, Neoplasms/metabolism, Tissue Array Analysis, Translational Biomedical Research/methods, Statistical Data Interpretation, Humans, Computer-Assisted Image Processing/methods, Immunohistochemistry, Patient Selection
4.
EBioMedicine ; 2(7): 681-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26288840

ABSTRACT

BACKGROUND: Citizen science, scientific research conducted by non-specialists, has the potential to facilitate biomedical research using available large-scale data; however, validating the results is challenging. Cell Slider is a citizen science project that shares images from tumors with the general public, enabling them to score tumor markers independently through an internet-based interface.

METHODS: From October 2012 to June 2014, 98,293 Citizen Scientists accessed the Cell Slider web page and scored 180,172 sub-images derived from images of 12,326 tissue microarray cores labeled for estrogen receptor (ER). We evaluated the accuracy of the Citizen Scientists' ER classification, and the association between ER status and prognosis, by comparing their test performance against trained pathologists.

FINDINGS: The area under the ROC curve was 0.95 (95% CI 0.94 to 0.96) for cancer cell identification and 0.97 (95% CI 0.96 to 0.97) for ER status. ER-positive tumors scored by Citizen Scientists were associated with survival in a similar way to those scored by trained pathologists. Survival probability at 15 years was 0.78 (95% CI 0.76 to 0.80) for ER-positive and 0.72 (95% CI 0.68 to 0.77) for ER-negative tumors based on the Citizen Scientists' classification; based on the pathologists' classification, it was 0.79 (95% CI 0.77 to 0.81) for ER-positive and 0.71 (95% CI 0.67 to 0.74) for ER-negative tumors. The hazard ratio for death was 0.26 (95% CI 0.18 to 0.37) at diagnosis and became greater than one after 6.5 years of follow-up for ER scored by Citizen Scientists, and 0.24 (95% CI 0.18 to 0.33) at diagnosis, increasing thereafter to one after 6.7 (95% CI 4.1 to 10.9) years of follow-up, for ER scored by pathologists.

INTERPRETATION: Crowdsourcing the general public to classify cancer pathology data for research is viable, engages the public and provides accurate ER data. Crowdsourced classification of research data may offer a valid solution to problems of throughput requiring human input.
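The 15-year survival probabilities quoted above are Kaplan-Meier estimates. The sketch below shows how the product-limit estimator works on a small, entirely invented set of follow-up times; it is not the study's analysis code, and the hazard ratios quoted alongside those probabilities come from proportional hazards modelling (see the subject terms below), which this sketch does not attempt.

    # Illustrative sketch only: a minimal Kaplan-Meier (product-limit)
    # estimator of the kind behind the 15-year survival probabilities
    # quoted above. The follow-up data below are invented for demonstration.

    def kaplan_meier(durations, events):
        # durations: follow-up time (years) per subject
        # events:    1 if death observed, 0 if censored at that time
        at_risk = len(durations)
        surv = 1.0
        curve = []
        for t in sorted(set(durations)):
            deaths = sum(1 for d, e in zip(durations, events) if d == t and e == 1)
            removed = sum(1 for d in durations if d == t)
            if deaths:
                surv *= 1.0 - deaths / at_risk  # product-limit step at each event time
                curve.append((t, surv))
            at_risk -= removed                  # deaths and censored subjects leave the risk set
        return curve

    # Hypothetical follow-up times (years) and event indicators.
    durations = [2.0, 3.5, 5.0, 7.5, 9.0, 11.0, 12.5, 15.0, 15.0, 15.0]
    events    = [1,   0,   1,   0,   1,   0,    1,    0,    0,    0]

    for t, s in kaplan_meier(durations, events):
        print(f"S({t}) = {s:.2f}")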


Subjects
Breast Neoplasms/pathology, Crowdsourcing, Molecular Pathology, Breast Neoplasms/mortality, Female, Humans, Kaplan-Meier Estimate, Proportional Hazards Models, ROC Curve, Estrogen Receptors/metabolism